Sabine Hauert
AIhub coffee corner: Bad practice in the publication world
This month we tackle the topic of bad practice in the sphere of publication. Joining the conversation this time are: Sanmay Das (Virginia Tech), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), and Sarit Kraus (Bar-Ilan University). Sabine Hauert: Today's topic is bad practice in the publication world. For example, people trying to cheat the review system, paper mills. What bad behaviors have you seen, and is it really a problem? Tom Dietterich: Well, I can talk about it from an arXiv point of view.
Distributed Spatial Awareness for Robot Swarms
Building a distributed spatial awareness within a swarm of locally sensing and communicating robots enables new swarm algorithms. We use local observations by robots of each other and Gaussian Belief Propagation message passing combined with continuous swarm movement to build a global and distributed swarm-centric frame of reference. With low bandwidth and computation requirements, this shared reference frame allows new swarm algorithms. We characterise the system in simulation and demonstrate two example algorithms.
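The Gaussian Belief Propagation mentioned in the abstract can be illustrated with a toy example. The sketch below is purely hypothetical and is not the authors' implementation: it assumes a simplified 1-D setting where robots sit on a line, robot 0 has a position prior, and each robot measures the noisy offset to its neighbour. On such a chain, one forward and one backward sweep of sum-product messages (each a Gaussian in mean/variance form) recovers the exact position marginals, which gives the flavour of how a shared frame of reference can emerge from purely local message passing.

```python
def fuse(a, b):
    """Precision-weighted product of two 1-D Gaussians given as (mean, variance)."""
    (m1, v1), (m2, v2) = a, b
    p = 1.0 / v1 + 1.0 / v2
    return ((m1 / v1 + m2 / v2) / p, 1.0 / p)

def gbp_chain(z, sigma2, prior=(0.0, 1e-6)):
    """Sum-product belief propagation on a chain of 1-D position variables.

    z[i] is a noisy measurement of x[i+1] - x[i] with variance sigma2;
    robot 0 carries the anchoring prior. Returns (mean, variance) marginals.
    """
    n = len(z) + 1
    big = 1e9  # near-uninformative message for the unanchored end

    # Forward sweep: message flowing rightwards into each node.
    left = [prior]
    for i in range(len(z)):
        m, v = left[-1]
        left.append((m + z[i], v + sigma2))  # shift mean by measurement, add noise

    # Backward sweep: message flowing leftwards into each node.
    right = [(0.0, big)] * n
    for i in range(len(z) - 1, -1, -1):
        m, v = right[i + 1]
        right[i] = (m - z[i], v + sigma2)

    # Marginal at each node is the product of its incoming messages.
    return [fuse(l, r) for l, r in zip(left, right)]

# Two measured offsets of 1.0 place three robots at roughly 0, 1, 2.
marginals = gbp_chain([1.0, 1.0], sigma2=0.01)
```

Because only neighbour-to-neighbour Gaussians are exchanged, each message is just two numbers, which is consistent with the abstract's claim of low bandwidth and computation requirements.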
AIhub coffee corner: Is it the end of GenAI hype?
There has been a string of articles recently about the end of generative AI hype. Our experts consider whether or not the bubble has burst. Joining the conversation this time are: Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Michael Littman (Brown University), and Marija Slavkovik (University of Bergen). Sabine Hauert: There have been a number of recent articles in the mainstream media talking about the fact that AI has not made any money, and that it might be all hype, or a bubble. Marija Slavkovik: There is this article by Cory Doctorow which asks what kind of bubble AI is. I really like his take that a lot of bubbles come and go; some of them leave us something useful and some of them just generate something for a brief moment in time, like excellent revenue for the investment bankers for example.
AIhub coffee corner: Open vs closed science
This month, we consider the debate around open vs closed science. Joining the conversation this time are: Joydeep Biswas (The University of Texas at Austin), Sanmay Das (George Mason University), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol) and Sarit Kraus (Bar-Ilan University). Sabine Hauert: There have been many discussions online recently about the topic of open vs closed science. We've seen a lot of people advocating for open AI (not the company, but being open generally, just to clarify!). I was at an event recently in preparation for the AI summit in the UK.
AIhub coffee corner: AI risks, pause letters and the ensuing discourse
This month, in light of the recent prominent discussions relating to perceived AI risks, we consider the pause letters and risk statements, the debate around existential threats, and how this discourse could impact the field and public perceptions. Joining the discussion this time are: Sanmay Das (George Mason University), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), Anna Tahovská (Czech Technical University), and Oskar von Stryk (Technische Universität Darmstadt). Sabine Hauert: In today's discussion we're going to talk about potential AI risks and the recent discourse around existential threats. Does anyone have any hot reactions? How do you feel about the discourse of existential threat? Tom Dietterich: I agree with Emily Bender and a lot of the critics that it's a distraction and a diversion from thinking about the more immediate threats.
Robot Talk: Episode Thirty-Four: Sabine Hauert
This week, Claire chatted to Sabine Hauert from the University of Bristol all about swarm robotics, nanorobots, and environmental monitoring. Sabine is Associate Professor of Swarm Engineering at the University of Bristol. She leads a team of 20 researchers working on making swarms for people, and across scales, from nanorobots for cancer treatment to larger robots for environmental monitoring and logistics. Previously she worked at MIT and EPFL. She is President and Executive Trustee of the non-profits robohub.org and aihub.org, which connect the robotics and AI communities to the public.
AIhub coffee corner: Large language models for scientific writing
The recent launches of two large language models, ChatGPT and Galactica, have led to much interest and controversy amongst the AI community, and beyond. These models, and in particular their potential use for writing scientific articles (and essays), provided the inspiration for this month's discussion. Joining the discussion this time are: Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), Michael Littman (Brown University), and Lucy Smith (AIhub). Sabine Hauert: Has anyone had a chance to use any of these new models yet? Sarit Kraus: During the summer I played with the previous version of GPT. Have you tried the latest version, Michael?
AIhub coffee corner: Is AI-generated art devaluing the work of artists?
This month, we tackle the topic of AI-generated art and what this means for artists. Joining the discussion this time are: Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), Michael Littman (Brown University), Lucy Smith (AIhub), Anna Tahovská (Czech Technical University), and Oskar von Stryk (Technische Universität Darmstadt). Sabine Hauert: This month our topic is AI-generated art. There are lots of questions relating to the value of the art that's generated by these AI systems, whether artists should be working with these tools, and whether that devalues the work that they do. Lucy Smith: I was interested in this case, whereby Shutterstock is now going to sell images created exclusively by OpenAI's DALL-E 2. They say that they're going to compensate the artists whose work they used in training the model, but I don't know how they are going to work out how much each training image has contributed to each created image that they sell.
AIhub coffee corner: Can AI make humans better?
This month, we ask if AI can make humans better. Joining the discussion this time are: Joe Daly (AIhub and University of Bristol), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Sarit Kraus (Bar-Ilan University), Michael Littman (Brown University), Lucy Smith (AIhub) and Oskar von Stryk (Technische Universität Darmstadt). Joe Daly: I recently saw this Twitter thread, about how AI has made human players better at the game of Go, then this article about the game of bridge, and more generally about AI's influence on us. People were actually discussing how AI can make us better at stuff, and how we can learn things from AI. What are people's thoughts on that?